
    Splicing of concurrent upper-body motion spaces with locomotion

    In this paper, we present a motion splicing technique for generating concurrent upper-body actions alongside an evolving lower-body locomotion sequence. Specifically, we show that a layered interpolation motion model generates upper-body poses while assigning a different action to each upper-body part. Hence, the proposed motion splicing approach can increase both the number of generated motions and the number of desired actions that virtual characters can perform. Additionally, we propose an iterative motion blending solution, inverse pseudo-blending, to maintain smooth and natural interaction between the virtual character and the virtual environment; inverse pseudo-blending is a constraint-based motion editing technique that blends the motions enclosed in a tetrahedron by minimising the distances between the end-effector positions of the actual and blended motions. To evaluate the proposed solution, we implemented an example-based application for interactive motion splicing based on specified constraints. Finally, the generated results show that the proposed solution can be beneficially applied to interactive applications where concurrent upper-body actions are desired.
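The abstract describes inverse pseudo-blending as blending the four motions enclosing a tetrahedron so the blended end-effector lands near a constraint. The paper's exact formulation is not given here; a minimal sketch, assuming barycentric weights over the tetrahedron's vertices solved in a least-squares sense (all names and the example data are hypothetical):

```python
import numpy as np

def tetra_blend_weights(vertices, target):
    """Solve for blend weights w (summing to 1) so that sum_i w[i] * vertices[i]
    is as close as possible to target, in the least-squares sense.
    vertices: (4, 3) end-effector positions of the four example motions.
    target:   (3,)  desired end-effector position (the constraint)."""
    A = np.vstack([vertices.T, np.ones(4)])    # 4x4 system: positions plus affine row
    b = np.append(target, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)  # least squares tolerates degenerate tetrahedra
    return w

# Hypothetical example: four end-effector positions and a target inside them.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
target = np.array([0.25, 0.25, 0.25])
w = tetra_blend_weights(verts, target)
blended = w @ verts  # the same weights would blend the four full-body poses
```

In an iterative scheme like the one the abstract hints at, these weights would be re-solved as the constraint moves, with the blended pose driving the character each frame.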

    Data-driven techniques for animating virtual characters

    One of the key goals of current research in data-driven computer animation is the synthesis of new motion sequences from existing motion data. This thesis presents three novel techniques for synthesising the motion of a virtual character from existing motion data, and develops a framework of solutions to key character animation problems. The first motion synthesis technique is based on the character's locomotion composition process. This technique examines the ability to synthesise a variety of locomotion behaviours for a character while easily specified constraints (footprints) are placed in three-dimensional space. This is achieved by analysing existing motion data and by assigning the locomotion behaviour transition process to transition graphs that provide information about this process. However, virtual characters should also be able to animate according to different style variations. Therefore, a second technique is developed to synthesise real-time style variations of a character's motion. This technique uses the correlation between two different motion styles and, by assigning the motion synthesis process to a parameterised maximum a posteriori (MAP) framework, retrieves the desired style content of the input motion in real time, enhancing the realism of the newly synthesised motion sequence. The third technique synthesises the motion of the character's fingers either off-line or in real time during the performance capture process. The advantage of both variants is their ability to assign the motion searching process to motion features. The presented technique is able to estimate and synthesise a valid motion of the character's fingers, enhancing the realism of the input motion.
    To conclude, this thesis demonstrates that these three novel techniques combine into a framework that enables the realistic synthesis of virtual character movements, eliminating post-processing and enabling fast synthesis of the required motion.
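The locomotion composition technique delegates behaviour transitions to transition graphs. The thesis's graph construction is not described in this abstract; a minimal sketch of the idea, with an entirely hypothetical set of behaviours and allowed transitions:

```python
# Hypothetical transition graph: which locomotion behaviour may follow which.
# In the thesis these transitions would be derived from the analysed motion data.
transitions = {
    "walk": {"walk", "jog", "turn", "stop"},
    "jog":  {"jog", "walk", "run"},
    "run":  {"run", "jog"},
    "turn": {"turn", "walk"},
    "stop": {"stop", "walk"},
}

def is_valid_sequence(behaviours):
    """Check that every consecutive pair of behaviours is an allowed transition."""
    return all(b2 in transitions[b1] for b1, b2 in zip(behaviours, behaviours[1:]))
```

A planner would consult such a graph when mapping a chain of footprint constraints to a feasible chain of behaviours, e.g. inserting an intermediate "jog" because "walk" cannot transition directly to "run".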

    Rethinking shortest path: an energy expenditure approach

    Humans acting in constrained environments do not always plan according to shortest-path criteria; rather, they conceptually choose the path that minimises the amount of expended energy. Hence, virtual characters should be able to plan their paths using methods based not on path length but on the minimisation of actual expended energy. Thus, in this paper, we introduce a simple method that uses a formula for computing oxygen consumption (VO2) levels, a proxy for the energy expended by humans during various activities.
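The abstract does not state which VO2 formula the paper uses; a minimal sketch of the idea, assuming an ACSM-style walking estimate (VO2 ≈ 3.5 + 0.1·speed + 1.8·speed·grade, speed in m/min) as the edge weight in an otherwise standard Dijkstra search — the graph and numbers are hypothetical:

```python
import heapq
import math

def vo2_walking(speed_m_per_min, grade):
    """Approximate walking VO2 in ml O2 / kg / min (ACSM-style estimate;
    this specific formula is an assumption, not necessarily the paper's)."""
    return 3.5 + 0.1 * speed_m_per_min + 1.8 * speed_m_per_min * grade

def edge_energy(length_m, grade, speed_m_per_min=80.0):
    """Energy proxy for one segment: VO2 rate times time spent walking it."""
    minutes = length_m / speed_m_per_min
    return vo2_walking(speed_m_per_min, max(grade, 0.0)) * minutes

def min_energy_path(graph, start, goal):
    """Dijkstra with energy, not length, as the edge weight.
    graph: {node: [(neighbour, length_m, grade), ...]}"""
    dist, frontier = {start: 0.0}, [(0.0, start)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if u == goal:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, length, grade in graph.get(u, []):
            nd = d + edge_energy(length, grade)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(frontier, (nd, v))
    return math.inf

# Hypothetical terrain: a short steep climb versus a longer flat detour.
graph = {
    "A": [("B", 100, 0.30), ("C", 120, 0.0)],
    "C": [("B", 120, 0.0)],
    "B": [],
}
detour = min_energy_path(graph, "A", "B")   # flat detour wins on energy
direct = edge_energy(100, 0.30)             # shorter but steep
```

Here the 240 m flat detour costs less energy than the 100 m climb, so the energy-minimal path differs from the shortest one, which is exactly the distinction the paper draws.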

    Affective Image Sequence Viewing in Virtual Reality Theater Environment: Frontal Alpha Asymmetry Responses From Mobile EEG

    Background: Numerous studies have investigated emotion in virtual reality (VR) experiences using self-reported data in order to understand the valence and arousal dimensions of emotion. Objective physiological data concerning valence and arousal have been less explored. Electroencephalography (EEG) can be used to examine correlates of emotional responses such as valence and arousal in virtual reality environments. Used across varying fields of research, images are able to elicit a range of affective responses from viewers. In this study, we display image sequences with annotated valence and arousal values on a screen within a virtual reality theater environment. Understanding how brain activity responses are related to affective stimuli with known valence and arousal ratings may contribute to a better understanding of affective processing in virtual reality.
    Methods: We investigated frontal alpha asymmetry (FAA) responses to image sequences previously annotated with valence and arousal ratings. Twenty-four participants viewed image sequences in VR with known valence and arousal values while their brain activity was recorded. Participants wore the Oculus Quest VR headset and viewed the image sequences while immersed in a virtual reality theater environment.
    Results: Image sequences with higher valence ratings elicited greater FAA scores than image sequences with lower valence ratings (F[1, 23] = 4.631, p = 0.042), while image sequences with higher arousal scores elicited lower FAA scores than image sequences with low arousal (F[1, 23] = 7.143, p = 0.014). The effect of valence on alpha power did not reach statistical significance (F[1, 23] = 4.170, p = 0.053). We determined that only the high-valence, low-arousal image sequence elicited FAA significantly higher than FAA recorded during baseline (t[23] = −3.166, p = 0.002), suggesting that this image sequence was the most salient for participants.
    Conclusion: Image sequences with higher valence and lower arousal may lead to greater FAA responses in VR experiences. While these findings suggest that FAA data may be useful in understanding associations between self-reported valence and arousal and brain activity responses elicited by affective experiences in VR environments, additional research concerning individual differences in affective processing may be informative for the development of affective VR scenarios.
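The abstract does not detail the FAA computation; a minimal sketch under common assumptions (FAA as ln(right) − ln(left) alpha-band power, e.g. electrodes F4 and F3, with a plain FFT periodogram standing in for the authors' actual processing pipeline):

```python
import numpy as np

def alpha_power(signal, fs, band=(8.0, 12.0)):
    """Mean power in the alpha band via a plain FFT periodogram.
    signal: 1-D EEG samples; fs: sampling rate in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_alpha_asymmetry(left, right, fs):
    """FAA = ln(right alpha power) - ln(left alpha power),
    conventionally computed from a right/left frontal pair such as F4/F3.
    Positive FAA indicates relatively greater left-hemisphere activation
    (alpha power is inversely related to cortical activity)."""
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))
```

In a study like this one, such a score would be computed per participant and per image sequence, then compared against the baseline recording with the reported t- and F-tests.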